New "atmanlfv3inc" Rocoto job #2420
New "atmanlfv3inc" Rocoto job #2420
Conversation
Closing until changes made
This PR, a companion to Global Workflow PR [#2420](NOAA-EMC/global-workflow#2420), changes the variational YAML for JEDI to write to cubed sphere history rather than to the Gaussian grid. With the new changes to Global Workflow, the new gdas_fv3jedi_jediinc2fv3.x OOPS app will read the JEDI increment from the cubed sphere history, compute the FV3 increment, and interpolate/write it to the Gaussian grid. The only meaningful difference is that the internal calculations, namely the computation of the hydrostatic layer thickness increment, will be done on the native grid before interpolation rather than on the Gaussian grid after it. This makes more sense physically. Eventually the FV3 increment will be written to and read from cubed sphere history anyway.
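For intuition on why computing before interpolating matters, here is a toy numpy sketch (illustration only, not GDASApp code): linear interpolation and a nonlinear operation such as log(p) do not commute, so a quantity computed on the native grid and then interpolated differs slightly from the same quantity computed after interpolation.

```python
import numpy as np

# Toy 1-D illustration (not GDASApp code): interpolation and a nonlinear
# operation such as log(p) do not commute, which is why derived increments
# are better computed on the native grid before interpolation.
p_native = np.array([1000.0, 850.0, 500.0, 200.0])  # pressures on a "native" grid [hPa]
x_native = np.linspace(0.0, 1.0, p_native.size)
x_target = np.linspace(0.0, 1.0, 7)                 # points of an "output" grid

compute_then_interp = np.interp(x_target, x_native, np.log(p_native))
interp_then_compute = np.log(np.interp(x_target, x_native, p_native))

print(np.abs(compute_then_interp - interp_then_compute).max())  # small but nonzero
```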
Thank you both for your quick replies. Hopefully we can move this PR forward and get it into `develop`.
@DavidNew-NOAA Can you please update the pointer to gdasapp from this AM's commit after fc62ef5 and we can kick off the test again.
@aerorahul hash updated.
Orion tests
- Enable and run g-w JEDI ATM CI. The 2024022318 half cycle is complete. The 2024022400 gdas and enkfgdas cycles are complete. JEDI ATM jobs ran successfully in the gdas, enkfgdas, and gfs cycles.
- Enable and run GDASApp tests. All 45 tests pass.
@WalterKolczynski-NOAA, what else remains to be done before this PR can be merged into `develop`?
Thank you @TerrenceMcGuinness-NOAA for sharing what's going on. Does this problem prevent merger of this PR into `develop`?
Yeah, @RussTreadon-NOAA, that's the role of the Branch Manager of course, but I'm not sure why it would be a problem.
* upstream/develop:
  Add CCPP suite and FASTER option to UFS build (NOAA-EMC#2521)
  New "atmanlfv3inc" Rocoto job (NOAA-EMC#2420)
  Hotfix to disable STALLED in CI as an error (NOAA-EMC#2523)
  Add restart on failure capability for the forecast executable (NOAA-EMC#2510)
  Update parm/transfer list files to match vetted GFSv16 set (NOAA-EMC#2517)
  Update gdas_gsibec_ver to 20240416 (NOAA-EMC#2497)
  Adding more cycles to gempak script gfs_meta_sa2.sh (NOAA-EMC#2518)
  Update gsi_enkf.sh hash to 457510c (NOAA-EMC#2514)
  Enable using the FV3_global_nest_v1 CCPP suite (NOAA-EMC#2512)
  CI Refactoring and STALLED case detection (NOAA-EMC#2488)
  Add C768 and C1152 S2SW test cases (NOAA-EMC#2509)
  Fix paths for refactored prepocnobs task (NOAA-EMC#2504)
Draft until NOAA-EMC/global-workflow#2420 is merged, since that seems to be causing a failure of `test_gdasapp_atm_jjob_var_final`. Once merged I'll finish testing and make sure the cycling works. But in the meantime folks can review the structural changes of bringing in the JCB repo.

GW PR: NOAA-EMC/global-workflow#2477

Co-authored-by: danholdaway <danholdaway@users.noreply.github.com>
Co-authored-by: RussTreadon-NOAA <Russ.Treadon@noaa.gov>
This PR creates the atmensanlfv3inc job, the ensemble version of the atmanlfv3inc job created in GW PR #2420. Its GDASApp companion PR is [#1104](NOAA-EMC/GDASApp#1104), and its JCB-GDAS companion PR is [#3](NOAA-EMC/jcb-gdas#3).
This PR, a companion to GDASApp PR #983, creates a new Rocoto job called "atmanlfv3inc" that computes the FV3 atmosphere increment from the JEDI variational increment using a JEDI OOPS app in GDASApp, fv3jedi_fv3inc.x, which replaces the GDASApp Python script jediinc2fv3.py for the variational analysis. The "atmanlrun" job is renamed "atmanlvar" to better reflect its role, since it now runs one of two JEDI executables for the atmospheric analysis jobs.
Description
Previously, the JEDI variational executable would interpolate its increment to the Gaussian grid and write it out during the atmanlrun job, and the Python script jediinc2fv3.py would then read it and write the FV3 increment on the Gaussian grid during the atmanlfinal job. With these changes, the JEDI increment is written directly to the cubed sphere grid. Then, during the atmanlfv3inc job, the OOPS app reads it, computes the FV3 increment directly on the cubed sphere, and writes it out onto the Gaussian grid.
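For reference, here is a minimal sketch of the hydrostatic layer-thickness relation that underlies the delz increment. The function names, the 0.61 virtual-temperature factor, and the sign convention are illustrative assumptions, not the actual jediinc2fv3.py or fv3jedi_fv3inc.x source.

```python
import numpy as np

RD = 287.05     # dry-air gas constant [J kg-1 K-1]
GRAV = 9.80665  # gravitational acceleration [m s-2]

def hydrostatic_thickness(t, sphum, p_interface):
    """Hydrostatic layer thickness from layer temperature, specific humidity,
    and interface pressures ordered bottom to top (toy sign convention;
    FV3's delz carries the opposite sign)."""
    tv = t * (1.0 + 0.61 * sphum)  # virtual temperature (approximate factor)
    return (RD * tv / GRAV) * np.log(p_interface[:-1] / p_interface[1:])

def delz_increment(t_bkg, q_bkg, p_bkg, t_anl, q_anl, p_anl):
    # Analysis-minus-background thickness, evaluated on whichever grid the
    # inputs live on -- with this PR, the native cubed sphere.
    return (hydrostatic_thickness(t_anl, q_anl, p_anl)
            - hydrostatic_thickness(t_bkg, q_bkg, p_bkg))
```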
The reason for writing to the cubed sphere grid first is that the OOPS app would otherwise have to interpolate twice, once from the Gaussian grid to the cubed sphere before computing the increment and then back to the Gaussian grid, since all the underlying computations in JEDI are done on the native grid.
The motivation for this new app and job is that we eventually wish to transition all intermediate data to the native cubed sphere grid, and the OOPS framework gives us the flexibility to read and write to/from any grid format by changing the YAML configuration file rather than hardcoding it. When we do switch to the cubed sphere, it will be an easy transition. Moreover, the computations in the OOPS app will be done by a compiled executable rather than an interpreted Python script, providing some performance increase.
Type of change
Change characteristics
How has this been tested?
It has been tested with a cycling experiment with JEDI on both Hera and Orion to show that it runs without issues, and I have compared the FV3 increments computed by the original and new codes. The delp and hydrostatic delz increments, the key increments produced during this step, differ by relative errors of 10^-7 and 10^-2, respectively. This difference is most likely because the original Python script does its internal computation on the interpolated Gaussian grid, while the new OOPS app does its computations on the native cubed sphere grid before interpolating to the Gaussian grid.
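For anyone wanting to repeat the comparison, here is a sketch of the kind of relative-error check described above. The file paths and variable names (delp_inc, delz_inc) are placeholders, not the actual experiment output.

```python
import numpy as np
from netCDF4 import Dataset

def max_relative_error(file_old, file_new, varname):
    """Maximum absolute difference between two increment fields, normalized
    by the maximum magnitude of the old field."""
    with Dataset(file_old) as f_old, Dataset(file_new) as f_new:
        old = np.asarray(f_old.variables[varname][:])
        new = np.asarray(f_new.variables[varname][:])
    denom = np.abs(old).max()
    return np.abs(new - old).max() / denom if denom > 0 else 0.0

# Placeholder file and variable names -- substitute the increments written by
# the old Python script and the new OOPS app from your own experiment.
for var in ("delp_inc", "delz_inc"):
    print(var, max_relative_error("atminc_old.nc", "atminc_new.nc", var))
```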
Checklist